Although significant progress has been made in face recognition, demographic bias still exists in face recognition systems. For instance, the recognition performance for a certain demographic group is often lower than that for others. In this paper, we propose the MixFairFace framework to improve the fairness of face recognition models. First, we argue that the commonly used attribute-based fairness metric is not appropriate for face recognition: a face recognition system can only be considered fair when every individual achieves comparable performance. Hence, we propose a new evaluation protocol to fairly evaluate the fairness of different approaches. Unlike previous approaches that require sensitive attribute labels such as race and gender to reduce demographic bias, we aim to address the identity bias in face representations, i.e., the performance inconsistency between different identities, without needing sensitive attribute labels. To this end, we propose the MixFair Adapter to determine and reduce the identity bias of training samples. Our extensive experiments demonstrate that MixFairFace achieves state-of-the-art fairness performance on all benchmark datasets.
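To make the idea concrete, here is a minimal, hypothetical sketch of how a mixing-based adapter could estimate identity bias: two embeddings are mixed 50/50, and the asymmetry in how well the mix still matches each identity's class prototype serves as a bias signal. The function names and the margin-adjustment idea in the comments are illustrative assumptions, not the paper's exact formulation.

```python
# Hypothetical sketch of a MixFair-style bias estimate. Assumed names throughout.
import torch
import torch.nn.functional as F

def identity_bias(emb_a, emb_b, proto_a, proto_b):
    """Estimate which identity dominates a 50/50 feature mix.

    emb_*:   L2-normalized embeddings of two samples, shape (D,)
    proto_*: L2-normalized class prototypes of their identities, shape (D,)
    Returns a signed scalar; positive means identity A dominates the mix.
    """
    mixed = F.normalize(0.5 * emb_a + 0.5 * emb_b, dim=0)
    sim_a = torch.dot(mixed, proto_a)   # how well the mix still matches A
    sim_b = torch.dot(mixed, proto_b)   # how well the mix still matches B
    return sim_a - sim_b

# Toy usage: a large |bias| suggests one sample's identity is over-represented
# in the feature space, so its training margin could be adjusted accordingly.
d = 128
a, b = F.normalize(torch.randn(d), dim=0), F.normalize(torch.randn(d), dim=0)
pa, pb = F.normalize(torch.randn(d), dim=0), F.normalize(torch.randn(d), dim=0)
print(identity_bias(a, b, pa, pb))
```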
Owing to the rise of spherical cameras, monocular 360° depth estimation has become an important technique for many applications, such as autonomous systems. Accordingly, state-of-the-art frameworks for monocular 360° depth estimation, such as the bi-projection fusion in BiFuse, have been proposed. Training such frameworks, however, requires a large number of panoramas along with corresponding depth ground truth captured by laser sensors, which greatly increases the cost of data collection. Moreover, since such a data-collection procedure is time-consuming, the scalability of extending these methods to different scenes becomes a challenge. To this end, self-training a monocular depth estimation network from 360° videos is one way to alleviate this issue. However, no existing framework incorporates bi-projection fusion into a self-training scheme, which greatly limits the self-supervised performance, since bi-projection fusion can leverage information from different projection types. In this paper, we propose BiFuse++ to explore the combination of bi-projection fusion and the self-training scenario. Specifically, we propose a new fusion module and a contrast-aware photometric loss to improve the performance of BiFuse and increase the stability of self-training on real-world videos. We conduct both supervised and self-supervised experiments on benchmark datasets and achieve state-of-the-art performance.
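As one plausible reading of the "contrast-aware" term, a photometric loss for self-training could weight the usual per-pixel reconstruction error by local image contrast, so that low-texture regions contribute less. The sketch below follows that assumption and is not BiFuse++'s exact loss.

```python
# Minimal contrast-weighted photometric loss sketch; the weighting rule is assumed.
import torch
import torch.nn.functional as F

def contrast_aware_photometric_loss(pred, target, win=7, eps=1e-6):
    """pred/target: (B, 3, H, W) warped source frame and reference frame."""
    l1 = (pred - target).abs().mean(dim=1, keepdim=True)           # (B,1,H,W)
    gray = target.mean(dim=1, keepdim=True)
    mu = F.avg_pool2d(gray, win, stride=1, padding=win // 2)
    var = F.avg_pool2d(gray * gray, win, stride=1, padding=win // 2) - mu ** 2
    contrast = var.clamp(min=0).sqrt()                             # local std
    w = contrast / (contrast.mean() + eps)                         # normalized weight
    return (w * l1).mean()

loss = contrast_aware_photometric_loss(torch.rand(2, 3, 64, 128),
                                       torch.rand(2, 3, 64, 128))
print(loss.item())
```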
Although self-supervised learning has been shown to benefit numerous vision tasks, existing techniques mainly focus on image-level operations, which may not generalize well to downstream tasks at the patch or pixel level. Moreover, existing SSL methods may not sufficiently describe and associate such representations within and across image scales. In this paper, we propose a Self-Supervised Pyramid Representation Learning (SS-PRL) framework. The proposed SS-PRL is designed to derive pyramid representations at the patch level by learning proper prototypes, together with the ability to observe and associate the inherent semantic information within an image. In particular, we present cross-scale patch-level correlation learning in SS-PRL, which allows the model to aggregate and associate information across patch scales. We show that, with our proposed SS-PRL for model pre-training, one can easily adapt and fine-tune the model for a variety of applications, including multi-label classification, object detection, and instance segmentation.
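The following illustrative sketch shows patch-level prototype assignment at two scales with a cross-scale consistency term, loosely in the spirit of the description above. The prototype count, pooling rule, and KL-based loss are assumptions, not the paper's exact design.

```python
# Patch-to-prototype soft assignment across two scales; all specifics are assumed.
import torch
import torch.nn.functional as F

D, K = 64, 32                              # feature dim, number of prototypes
protos = F.normalize(torch.randn(K, D), dim=1)

def assign(patch_feats, temp=0.1):
    """Soft-assign L2-normalized patch features (N, D) to prototypes -> (N, K)."""
    return F.softmax(patch_feats @ protos.t() / temp, dim=1)

fine = F.normalize(torch.randn(16, D), dim=1)     # 4x4 grid of small patches
coarse = F.normalize(torch.randn(4, D), dim=1)    # 2x2 grid of large patches

q_fine = assign(fine).view(2, 2, 2, 2, K).mean(dim=(1, 3))  # pool each 2x2 cell
q_coarse = assign(coarse).view(2, 2, K)

# Cross-scale correlation: pooled fine assignments should agree with coarse ones.
loss = F.kl_div(q_fine.log(), q_coarse, reduction="batchmean")
print(loss.item())
```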
Few-shot classification aims to perform classification when only a few labeled examples of the classes of interest are available. Although several approaches have been proposed, most existing few-shot learning (FSL) models assume that the base and novel classes are drawn from the same data domain. Recognizing novel-class data in an unseen domain thus becomes the even more challenging task of domain-generalized few-shot classification. In this paper, we present a unique learning framework for domain-generalized few-shot classification, where the base classes come from homogeneous multiple source domains, while the novel classes to be recognized come from target domains unseen during training. By advancing meta-learning strategies, our learning framework exploits data across multiple source domains to capture domain-invariant features, and introduces FSL capability via a metric-learning-based mechanism across support and query data. We conduct extensive experiments to verify the effectiveness of our proposed learning framework, and show that learning from small yet homogeneous source data is able to perform preferably against learning from large-scale data. Moreover, we provide insights into the choice of backbone models for domain-generalized few-shot classification.
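The metric-learning mechanism across support and query data can be illustrated with a standard prototypical-network episode, sketched below. The episode sizes and the plain Euclidean metric are generic choices; the full framework additionally meta-trains across multiple source domains for domain invariance.

```python
# A compact prototypical-network episode; a generic metric-based FSL baseline.
import torch
import torch.nn.functional as F

def episode_loss(support, support_y, query, query_y, n_way):
    """support: (N_s, D), query: (N_q, D); labels in [0, n_way)."""
    protos = torch.stack([support[support_y == c].mean(dim=0)
                          for c in range(n_way)])              # (n_way, D)
    logits = -torch.cdist(query, protos)                       # negative distances
    return F.cross_entropy(logits, query_y)

D, n_way, k_shot, n_query = 64, 5, 5, 15
support = torch.randn(n_way * k_shot, D)
support_y = torch.arange(n_way).repeat_interleave(k_shot)
query = torch.randn(n_way * n_query, D)
query_y = torch.arange(n_way).repeat_interleave(n_query)
print(episode_loss(support, support_y, query, query_y, n_way).item())
```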
Few-shot semantic segmentation addresses the learning task in which only a few images with ground-truth pixel-level labels are available for the novel classes of interest. One is typically required to collect a large amount of data (i.e., base classes) with such ground-truth information, followed by meta-learning strategies to address the above learning task. When only image-level semantic labels can be observed during training and testing, it is considered the even more challenging task of weakly supervised few-shot semantic segmentation. To address this problem, we propose a novel meta-learning framework that predicts pseudo pixel-level segmentation masks from a limited amount of data and their semantic labels. More importantly, our learning scheme further exploits the produced pixel-level information for query image inputs with segmentation guarantees. Thus, our proposed learning model can be viewed as a pixel-level meta-learner. Through extensive experiments on benchmark datasets, we show that our model achieves satisfactory performance under the fully supervised setting, while performing favorably against state-of-the-art methods under the weakly supervised setting.
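A hypothetical sketch of the pseudo-mask idea: score each pixel feature against a class embedding (CAM-style) and threshold the normalized similarity map. The real pixel-level meta-learner is a trained predictor; this only conveys how image-level labels can yield pixel-level pseudo supervision.

```python
# Toy pseudo pixel-level mask from an image-level class embedding; names assumed.
import torch
import torch.nn.functional as F

def pseudo_mask(feat_map, class_emb, thresh=0.5):
    """feat_map: (C, H, W) pixel features; class_emb: (C,) class embedding."""
    f = F.normalize(feat_map.flatten(1), dim=0)        # (C, H*W), unit columns
    e = F.normalize(class_emb, dim=0)
    score = (e @ f).view(feat_map.shape[1:])           # cosine map, (H, W)
    score = (score - score.min()) / (score.max() - score.min() + 1e-6)
    return (score > thresh).float()                    # binary pseudo mask

mask = pseudo_mask(torch.randn(64, 32, 32), torch.randn(64))
print(mask.mean().item())   # fraction of pixels assigned to the class
```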
In this paper, we propose a robust 3D detector, named Cross Modal Transformer (CMT), for end-to-end 3D multi-modal detection. Without explicit view transformation, CMT takes image and point-cloud tokens as inputs and directly outputs accurate 3D bounding boxes. The spatial alignment of multi-modal tokens is performed implicitly by encoding the 3D points into multi-modal features. The core design of CMT is quite simple, while its performance is impressive: CMT obtains 73.0% NDS on the nuScenes benchmark. Moreover, CMT remains strongly robust even if the LiDAR input is missing. Code will be released at https://github.com/junjie18/CMT.
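A schematic sketch of the token interface described above: image and LiDAR tokens are tagged with position encodings derived from 3D coordinates, concatenated, and decoded by object queries. The dimensions and the tiny decoder are placeholders, not the real model; see the official repository linked above for that.

```python
# Shared 3D-coordinate position encoding over two modalities; sizes are toy values.
import torch
import torch.nn as nn

D = 128
pos_mlp = nn.Sequential(nn.Linear(3, D), nn.ReLU(), nn.Linear(D, D))
decoder = nn.TransformerDecoder(
    nn.TransformerDecoderLayer(d_model=D, nhead=4, batch_first=True), num_layers=2)

img_tok, img_xyz = torch.randn(1, 200, D), torch.rand(1, 200, 3)   # camera tokens
pts_tok, pts_xyz = torch.randn(1, 500, D), torch.rand(1, 500, 3)   # LiDAR tokens

# Implicit alignment: both modalities share one 3D-coordinate position encoding.
tokens = torch.cat([img_tok + pos_mlp(img_xyz),
                    pts_tok + pos_mlp(pts_xyz)], dim=1)            # (1, 700, D)

queries = torch.randn(1, 100, D)                # learnable object queries
out = decoder(queries, tokens)                  # (1, 100, D) -> box/class heads
print(out.shape)
```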
Knowledge graphs (KGs) have served as a key component of various natural language processing applications. Commonsense knowledge graphs (CKGs) are a special type of KG, where entities and relations are composed of free-form text. However, previous works on KG completion and CKG completion suffer from long-tail relations and newly added relations which do not have many known triples for training. In light of this, few-shot KG completion (FKGC), which requires the strengths of graph representation learning and few-shot learning, has been proposed to tackle the problem of limited annotated data. In this paper, we comprehensively survey previous attempts at such tasks in the form of a series of methods and applications. Specifically, we first introduce FKGC challenges, commonly used KGs, and CKGs. Then we systematically categorize and summarize existing works in terms of the type of KG and the methods. Finally, we present applications of FKGC models on prediction tasks in different areas and share our thoughts on future research directions of FKGC.
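To pin down the task setup: given a handful of support triples for a rare relation, FKGC scores candidate tails for a query head. The toy below uses a TransE-style additive scoring rule purely for illustration; surveyed methods use far richer relation representations.

```python
# Toy FKGC setup: 2-shot support for a long-tail relation, then tail ranking.
import torch

emb = {name: torch.randn(16) for name in
       ["mozart", "vienna", "beethoven", "bonn", "salzburg", "haydn"]}

# 2-shot support set for the long-tail relation "born_in"
support = [("mozart", "salzburg"), ("beethoven", "bonn")]
rel = torch.stack([emb[t] - emb[h] for h, t in support]).mean(dim=0)

query_head, candidates = "haydn", ["vienna", "bonn", "salzburg"]
scores = {c: -torch.norm(emb[query_head] + rel - emb[c]).item()
          for c in candidates}                     # TransE-style: h + r ≈ t
print(max(scores, key=scores.get))                 # highest-scoring candidate tail
```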
Few-Shot Instance Segmentation (FSIS) requires models to detect and segment novel classes with only a few support examples. In this work, we explore a simple yet unified solution for FSIS as well as its incremental variants, and introduce a new framework named Reference Twice (RefT) to fully explore the relationship between support and query features based on a Transformer-like framework. Our key insights are twofold: first, with the aid of support masks, we can generate dynamic class centers more appropriately to re-weight query features; second, we find that support object queries have already encoded key factors after base training. In this way, the query features can be enhanced twice, from two aspects, i.e., the feature level and the instance level. In particular, we first design a mask-based dynamic weighting module to enhance support features and then propose to link object queries for better calibration via cross-attention. After the above steps, the novel classes can be improved significantly over our strong baseline. Additionally, our new framework can be easily extended to incremental FSIS with minor modifications. When benchmarking results on the COCO dataset for the FSIS, gFSIS, and iFSIS settings, our method achieves competitive performance compared to existing approaches across different shots; e.g., we boost nAP by a noticeable +8.2/+9.4 over the current state-of-the-art FSIS method for 10/30-shot. We further demonstrate the superiority of our approach on few-shot object detection. Code and model will be available.
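A rough sketch of the first reference step: masked average pooling turns support features into a dynamic class center, which then re-weights query features per pixel. The shapes and the sigmoid re-weighting rule are illustrative assumptions, not RefT's exact module.

```python
# Masked-average-pooling class center re-weighting query features; details assumed.
import torch
import torch.nn.functional as F

def reweight_query(sup_feat, sup_mask, qry_feat):
    """sup_feat: (C, H, W); sup_mask: (H, W) in {0,1}; qry_feat: (C, H, W)."""
    m = sup_mask.flatten()                                            # (H*W,)
    center = (sup_feat.flatten(1) * m).sum(dim=1) / (m.sum() + 1e-6)  # (C,)
    sim = F.cosine_similarity(qry_feat,                               # per-pixel sim
                              center.view(-1, 1, 1).expand_as(qry_feat), dim=0)
    return qry_feat * torch.sigmoid(sim).unsqueeze(0)                 # feature boost

out = reweight_query(torch.randn(64, 32, 32),
                     (torch.rand(32, 32) > 0.5).float(),
                     torch.randn(64, 32, 32))
print(out.shape)
```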
Graph Neural Networks (GNNs) have shown satisfying performance on various graph learning tasks. To achieve better fitting capability, most GNNs have a large number of parameters, which makes them computationally expensive. Therefore, it is difficult to deploy them onto edge devices with scarce computational resources, e.g., mobile phones and wearable smart devices. Knowledge Distillation (KD) is a common solution to compress GNNs, where a lightweight model (i.e., the student model) is encouraged to mimic the behavior of a computationally expensive GNN (i.e., the teacher GNN model). Nevertheless, most existing GNN-based KD methods lack fairness consideration. As a consequence, the student model usually inherits and even exaggerates the bias of the teacher GNN. To handle such a problem, we take initial steps towards fair knowledge distillation for GNNs. Specifically, we first formulate a novel problem of fair knowledge distillation for GNN-based teacher-student frameworks. Then we propose a principled framework named RELIANT to mitigate the bias exhibited by the student model. Notably, the design of RELIANT is decoupled from any specific teacher or student model structure, and thus can be easily adapted to various GNN-based KD frameworks. We perform extensive experiments on multiple real-world datasets, which corroborate that RELIANT achieves less biased GNN knowledge distillation while maintaining high prediction utility.
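For intuition only, the sketch below pairs the standard soft-label distillation term with a demographic-parity-style penalty on the student's outputs. This generic regularizer conveys the shape of a fairness-aware KD objective; RELIANT's actual debiasing mechanism is different and model-agnostic by design.

```python
# Simplified fairness-regularized distillation loss; the parity term is assumed.
import torch
import torch.nn.functional as F

def fair_kd_loss(student_logits, teacher_logits, sens, T=2.0, lam=1.0):
    """sens: (N,) binary sensitive-attribute labels, one per node."""
    kd = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                  F.softmax(teacher_logits / T, dim=1),
                  reduction="batchmean") * T * T        # soft-label distillation
    p = F.softmax(student_logits, dim=1)[:, 1]          # P(class 1) per node
    parity_gap = (p[sens == 0].mean() - p[sens == 1].mean()).abs()
    return kd + lam * parity_gap

n = 32
sens = torch.arange(n) % 2                              # two balanced groups
print(fair_kd_loss(torch.randn(n, 2), torch.randn(n, 2), sens).item())
```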
This paper focuses on designing efficient models with low parameter counts and FLOPs for dense predictions. Even though CNN-based lightweight methods have achieved stunning results after years of research, the trade-off between model accuracy and constrained resources still needs further improvement. This work rethinks the essential unity of the efficient Inverted Residual Block in MobileNetV2 and the effective Transformer in ViT, inductively abstracting a general concept, the Meta-Mobile Block, and we argue that the specific instantiation is very important to model performance even when instantiations share the same framework. Motivated by this observation, we deduce a simple yet efficient modern Inverted Residual Mobile Block (iRMB) for mobile applications, which absorbs CNN-like efficiency for modeling short-distance dependencies and Transformer-like dynamic modeling capability for learning long-distance interactions. Furthermore, we design a ResNet-like 4-phase Efficient MOdel (EMO) based only on a series of iRMBs for dense applications. Extensive experiments on the ImageNet-1K, COCO2017, and ADE20K benchmarks demonstrate the superiority of our EMO over state-of-the-art methods; e.g., our EMO-1M/2M/5M achieve 71.5, 75.1, and 78.4 Top-1 accuracy, surpassing SoTA CNN-/Transformer-based models, while trading off model accuracy and efficiency well.
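A condensed sketch of the iRMB idea follows: an inverted-residual bottleneck whose expanded stage pairs a depthwise convolution (short-distance dependency) with self-attention (long-distance interaction). The expansion ratio and operator ordering are guesses at the spirit of the block, not EMO's exact implementation.

```python
# Inverted residual block mixing depthwise conv and attention; details assumed.
import torch
import torch.nn as nn

class iRMBSketch(nn.Module):
    def __init__(self, dim, expand=4, heads=4):
        super().__init__()
        hid = dim * expand
        self.expand = nn.Conv2d(dim, hid, 1)                     # pointwise expand
        self.dw = nn.Conv2d(hid, hid, 3, padding=1, groups=hid)  # local mixing
        self.attn = nn.MultiheadAttention(hid, heads, batch_first=True)
        self.project = nn.Conv2d(hid, dim, 1)                    # pointwise project
        self.act = nn.GELU()

    def forward(self, x):                       # x: (B, C, H, W)
        b, _, h, w = x.shape
        y = self.act(self.expand(x))
        y = self.act(self.dw(y))
        t = y.flatten(2).transpose(1, 2)        # (B, H*W, hid) token sequence
        t, _ = self.attn(t, t, t)               # global interactions
        y = y + t.transpose(1, 2).view(b, -1, h, w)
        return x + self.project(y)              # inverted residual connection

print(iRMBSketch(32)(torch.randn(1, 32, 8, 8)).shape)
```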